Saddle Point Seeking for Convex Optimization Problems
Authors
Abstract
In this paper, we consider convex optimization problems with constraints. By combining the idea of Lie bracket approximations for extremum seeking systems with saddle point algorithms, we propose a feedback law that steers a single-integrator system to the set of saddle points of the Lagrangian associated with the convex optimization problem. We prove practical uniform asymptotic stability of the set of saddle points of the extremum seeking system for strictly convex as well as linear programs. Using a numerical example, we illustrate how the approach can be applied to distributed optimization problems.
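To make the construction concrete, here is a minimal numerical sketch of saddle point seeking of this flavor: sinusoidal dithers are chosen so that the corresponding Lie bracket (averaged) system descends the Lagrangian in the primal variables and ascends it in the dual variable, using only evaluations of the Lagrangian, never its gradient. The toy problem, gains, and frequencies are illustrative assumptions, not values from the paper.

import numpy as np

# Toy problem (illustrative, not from the paper):
# minimize (x1 - 1)^2 + (x2 + 2)^2  subject to  x1 + x2 = 1.
# The Lagrangian L(x, lam) = f(x) + lam*(x1 + x2 - 1) has its
# saddle point at x* = (2, -1), lam* = -2.
def lagrangian(x1, x2, lam):
    return (x1 - 1.0)**2 + (x2 + 2.0)**2 + lam * (x1 + x2 - 1.0)

alpha = 1.0                              # gain of the averaged (Lie bracket) flow
omega = np.array([70.0, 90.0, 110.0])    # distinct, non-resonant dither frequencies
amp = np.sqrt(alpha * omega)             # dither amplitudes
dt, T = 1e-4, 40.0                       # Euler step and simulation horizon

z = np.zeros(3)                          # state: (x1, x2, lam)
for k in range(int(T / dt)):
    t = k * dt
    L = lagrangian(z[0], z[1], z[2])     # only Lagrangian *values* are used
    dz = np.array([
        # primal: averaged dynamics ~ -(alpha/2) * dL/dx  (descent)
        amp[0] * np.cos(omega[0] * t) - L * amp[0] * np.sin(omega[0] * t),
        amp[1] * np.cos(omega[1] * t) - L * amp[1] * np.sin(omega[1] * t),
        # dual: sign flipped, averaged dynamics ~ +(alpha/2) * dL/dlam  (ascent)
        amp[2] * np.cos(omega[2] * t) + L * amp[2] * np.sin(omega[2] * t),
    ])
    z = z + dt * dz

print(z)   # oscillates in a neighborhood of (2, -1, -2) for large omega

The residual oscillation shrinks as the dither frequencies grow, which is exactly the practical (rather than exact) asymptotic stability stated in the abstract.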
Similar resources
An efficient modified neural network for solving nonlinear programming problems with hybrid constraints
This paper presents optimization techniques for solving convex programming problems with hybrid constraints. Based on the saddle point theorem, optimization theory, convex analysis, Lyapunov stability theory, and the LaSalle invariance principle, a neural network model is constructed. The equilibrium point of the proposed model is proved to be equivalent to the optima...
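The truncated abstract does not fix the model; for orientation, below is a minimal sketch of a classical projection neural network from the same family, with a hypothetical quadratic objective and box constraint, whose equilibria coincide with the constrained optima.

import numpy as np

# Hypothetical instance: minimize f(x) = ||x - c||^2 over the box [0, 1]^2.
c = np.array([1.5, -0.5])

def grad_f(x):
    return 2.0 * (x - c)

def project(y):                     # projection onto the feasible set
    return np.clip(y, 0.0, 1.0)

alpha, dt = 0.5, 1e-2
x = np.zeros(2)
for _ in range(2000):
    # ODE model: dx/dt = -x + P(x - alpha * grad_f(x)); its equilibria
    # satisfy x = P(x - alpha * grad_f(x)), i.e. the optimality condition.
    x = x + dt * (-x + project(x - alpha * grad_f(x)))
print(x)                            # -> approximately (1.0, 0.0)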
Full text
Communication-Efficient Distributed Primal-Dual Algorithm for Saddle Point Problem
Primal-dual algorithms, which are proposed to solve reformulated convex-concave saddle point problems, have been proven to be effective for solving a generic class of convex optimization problems, especially when the problems are ill-conditioned. However, the saddle point problem still lacks a distributed optimization framework where primal-dual algorithms can be employed. In this paper, we pro...
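The paper's contribution is a distributed, communication-efficient variant; as a single-machine point of reference, a basic primal-dual iteration (gradient descent in x, gradient ascent in y) on a hypothetical equality-constrained toy problem looks as follows.

import numpy as np

# Toy problem: min 0.5*||x||^2  s.t.  A x = b, reformulated as the
# saddle point problem  min_x max_y  0.5*||x||^2 + y^T (A x - b).
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

x, y = np.zeros(2), np.zeros(1)
tau, sigma = 0.1, 0.1               # primal and dual step sizes
for _ in range(500):
    x = x - tau * (x + A.T @ y)     # descent in the primal variable
    y = y + sigma * (A @ x - b)     # ascent in the dual variable
print(x, y)                         # -> x ~ (0.5, 0.5), y ~ (-0.5)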
متن کامل2 1 M ay 2 01 4 On solving large scale polynomial convex problems by randomized first - order algorithms ∗
One of the most attractive recent approaches to processing well-structured large-scale convex optimization problems is based on smooth convex-concave saddle point reformulation of the problem of interest and solving the resulting problem by a fast First Order saddle point method utilizing smoothness of the saddle point cost function. In this paper, we demonstrate that when the saddle point cost...
متن کاملOn Solving Large-Scale Polynomial Convex Problems by Randomized First-Order Algorithms
One of the most attractive recent approaches to processing well-structured large-scale convex optimization problems is based on smooth convex-concave saddle point reformulation of the problem of interest and solving the resulting problem by a fast first order saddle point method utilizing smoothness of the saddle point cost function. In this paper, we demonstrate that when the saddle point cost...
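As a minimal illustration of the smoothing step alone (the randomization is a separate ingredient of the paper), one can replace a max-type nonsmooth objective by its log-sum-exp smoothing and run plain gradient descent; the problem data below are hypothetical.

import numpy as np

# Nonsmooth toy problem: minimize h(x) = max_i (a_i^T x - b_i); these rows
# encode h(x) = max(|x1|, |x2|) - 1, so the minimum value is -1 at x = 0.
A = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
b = np.ones(4)
mu = 0.01   # smoothing: h_mu(x) = mu * log(sum_i exp((a_i^T x - b_i)/mu))

def smoothed_grad(x):
    r = (A @ x - b) / mu
    w = np.exp(r - r.max())          # numerically stable softmax weights
    w /= w.sum()
    return A.T @ w                   # grad h_mu; Lipschitz constant ||A||^2/mu

x = np.array([2.0, -1.5])
step = mu / np.linalg.norm(A, 2)**2  # = 1/L for the smoothed objective
for _ in range(5000):
    x = x - step * smoothed_grad(x)
print(x, np.max(A @ x - b))   # -> x near 0, value near -1 (up to O(mu*log 4))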
Full text
Accelerated gradient sliding for structured convex optimization
Our main goal in this paper is to show that one can skip gradient computations for gradient descent type methods applied to certain structured convex programming (CP) problems. To this end, we first present an accelerated gradient sliding (AGS) method for minimizing the summation of two smooth convex functions with different Lipschitz constants. We show that the AGS method can skip the gradient...
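Lan's accelerated gradient sliding is considerably more elaborate (both loops are accelerated), but the core idea of skipping gradient computations can be sketched as follows: evaluate the expensive gradient once per outer step and reuse it across several cheap inner steps. The quadratics and step sizes are hypothetical choices.

import numpy as np

# Composite toy objective F = f + h, where grad f is "expensive" and
# grad h is "cheap":  f(x) = 0.5 x^T Qf x,  h(x) = 0.5 x^T Qh x.
Qf = np.diag([1.0, 2.0])
Qh = np.diag([50.0, 0.5])
beta = 2.0             # proximity weight, >= Lipschitz constant of grad f
inner_step = 0.01

x = np.array([5.0, -3.0])
for _ in range(200):                 # outer loop
    gf = Qf @ x                      # the only grad-f evaluation per outer step
    u = x.copy()
    for _ in range(10):              # inner loop reuses gf instead of recomputing
        # gradient step on  u -> <gf, u> + h(u) + (beta/2)*||u - x||^2
        u = u - inner_step * (gf + Qh @ u + beta * (u - x))
    x = u
print(x)                             # -> approaches the minimizer 0 of f + h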
Full text